
    The Vigilance Decrement in Executive Function Is Attenuated When Individual Chronotypes Perform at Their Optimal Time of Day

    Time of day modulates our cognitive functions, especially those related to executive control, such as the ability to inhibit inappropriate responses. However, the impact of individual differences in time of day preferences (i.e., morning vs. evening chronotype) had not been considered by most studies. It was also unclear whether the vigilance decrement (impaired performance with time on task) depends on both time of day and chronotype. In this study, morning-type and evening-type participants performed a task measuring vigilance and response inhibition (the Sustained Attention to Response Task, SART) in morning and evening sessions. The results showed that the vigilance decrement in inhibitory performance was accentuated at non-optimal as compared to optimal times of day. In the morning-type group, inhibition performance decreased linearly with time on task only in the evening session, whereas in the morning session it remained more accurate and stable over time. In contrast, inhibition performance in the evening-type group showed a linear vigilance decrement in the morning session, whereas in the evening session the vigilance decrement was attenuated, following a quadratic trend. Our findings imply that the negative effects of time on task on executive control can be prevented by scheduling cognitive tasks at the optimal time of day according to individuals' specific circadian profiles. Therefore, time of day and chronotype influences should be considered in research and clinical studies, as well as in real-world situations demanding executive control for response inhibition. This work was supported by the Spanish Ministerio de Ciencia e Innovación (Ramón y Cajal programme: RYC-2007-00296 and PLAN NACIONAL de I+D+i: PSI2010-15399) and the Junta de Andalucía (SEJ-3054).
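
    The kind of trend analysis described above can be sketched as follows: a minimal example, assuming an eight-block task and invented block-wise no-go accuracy values, that fits linear and quadratic time-on-task trends and compares their fit. None of the numbers come from the study.
```python
# Hypothetical sketch: fitting linear vs. quadratic time-on-task trends
# to block-wise inhibition accuracy (block count and values are invented).
import numpy as np

blocks = np.arange(1, 9)                                        # eight task blocks (assumed)
accuracy = np.array([.92, .90, .87, .85, .84, .80, .78, .77])   # fake no-go accuracy per block

# Linear trend: a steady vigilance decrement with time on task.
lin_coefs = np.polyfit(blocks, accuracy, deg=1)
lin_fit = np.polyval(lin_coefs, blocks)

# Quadratic trend: a decrement that levels off, as at optimal times of day.
quad_coefs = np.polyfit(blocks, accuracy, deg=2)
quad_fit = np.polyval(quad_coefs, blocks)

# Compare fits by residual sum of squares.
rss_lin = np.sum((accuracy - lin_fit) ** 2)
rss_quad = np.sum((accuracy - quad_fit) ** 2)
print(f"linear slope={lin_coefs[0]:.4f}, linear RSS={rss_lin:.5f}, quadratic RSS={rss_quad:.5f}")
```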

    When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    The aim of this study was to investigate, using an eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black-and-white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level, visually driven bottom-up processes when a human subject was represented in the painting. In contrast, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when subjects looked at nature-content images. We discuss our results, proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception.
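
    As a hedged illustration of how such gaze data might be summarized, the sketch below computes the proportion of dwell time falling inside a content region of interest per subject and task; the data frame, column names, and values are invented for illustration, not taken from the study.
```python
# Hypothetical sketch: dwell-time summary by subject, task, and region of interest.
import pandas as pd

fixations = pd.DataFrame({
    "subject":  [1, 1, 1, 2, 2, 2],
    "task":     ["aesthetic", "aesthetic", "movement", "aesthetic", "movement", "movement"],
    "duration": [220, 310, 180, 250, 400, 150],          # fixation duration in ms (invented)
    "in_roi":   [True, False, True, True, True, False],  # fixation inside the human-figure ROI?
})

# Total dwell time inside vs. outside the ROI, then the proportion inside.
dwell = (fixations.groupby(["subject", "task", "in_roi"])["duration"].sum()
                  .unstack("in_roi", fill_value=0)
                  .rename(columns={False: "out_roi_ms", True: "in_roi_ms"}))
dwell["prop_roi"] = dwell["in_roi_ms"] / (dwell["in_roi_ms"] + dwell["out_roi_ms"])
print(dwell["prop_roi"])
```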

    The influence of visual flow and perceptual load on locomotion speed

    Visual flow is used to perceive and regulate movement speed during locomotion. We assessed the extent to which variation in flow from the ground plane, arising from static visual textures, influences locomotion speed under conditions of concurrent perceptual load. In two experiments, participants walked over a 12-m projected walkway consisting of stripes oriented orthogonally to the walking direction. In the critical conditions, the frequency of the stripes increased or decreased. We observed small but consistent effects on walking speed, such that participants walked more slowly when the frequency increased than when it decreased. This basic effect suggests that participants interpreted the change in visual flow in these conditions as at least partly due to a change in their own movement speed, and counteracted such a change by speeding up or slowing down. Critically, these effects were magnified under conditions of low perceptual load and a locus of attention near the ground plane. Our findings suggest that the contribution of vision to the control of ongoing locomotion is relatively fluid and depends on ongoing perceptual (and perhaps more generally cognitive) task demands.
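
    A minimal sketch of the compensation logic suggested above, assuming (purely for illustration) that experienced flow rate scales with stripe frequency times walking speed and that walkers nudge their speed to keep flow near a preferred rate; the parameters and gain are invented, not estimated from the experiments.
```python
# Hypothetical sketch: a walker partially compensates for increased optic flow
# by slowing down. All quantities below are illustrative assumptions.
preferred_flow = 1.4 * 2.0      # preferred speed (m/s) x baseline stripes per metre
gain = 0.1                      # assumed strength of the visual correction

def step_speed(speed, stripe_freq):
    """Return an adjusted walking speed after one control step."""
    flow = speed * stripe_freq             # experienced flow rate
    error = flow - preferred_flow          # too much flow -> feels too fast
    return speed - gain * error            # slow down when flow increases

speed = 1.4
for stripe_freq in [2.0, 2.2, 2.4, 2.6]:   # increasing stripe frequency
    speed = step_speed(speed, stripe_freq)
    print(f"freq={stripe_freq:.1f}/m -> speed={speed:.3f} m/s")
```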

    Influence of Low-Level Stimulus Features, Task Dependent Factors, and Spatial Biases on Overt Visual Attention

    Visual attention is thought to be driven by the interplay between low-level visual features and the task-dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task-dependent information content derived from our subjects' classification responses, and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes, thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant across tasks. The contribution of task-dependent information is a close runner-up; specifically, it scores highly in a standardized task of judging facial expressions. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the full correlation coefficients. These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
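
    A minimal sketch of this kind of analysis: a multivariate linear model with three salience predictors and semi-partial correlations obtained by residualizing each predictor on the other two. The data here are random placeholders, not the study's measurements.
```python
# Hypothetical sketch: relate empirical salience (overt attention per bubble)
# to three predictors and compute a semi-partial correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 500
low_level = rng.normal(size=n)          # low-level feature salience (placeholder)
task_info = rng.normal(size=n)          # task-dependent information content (placeholder)
spatial_bias = rng.normal(size=n)       # spatial viewing bias (placeholder)
attention = 0.3 * low_level + 0.5 * task_info + 0.6 * spatial_bias + rng.normal(size=n)

# Multivariate linear model relating the three measures to overt attention.
X = np.column_stack([np.ones(n), low_level, task_info, spatial_bias])
beta, *_ = np.linalg.lstsq(X, attention, rcond=None)

def semipartial_r(y, x, others):
    """Correlation of y with the part of x not explained by the other predictors."""
    Z = np.column_stack([np.ones(len(x))] + others)
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ coef
    return np.corrcoef(y, resid)[0, 1]

print("betas:", np.round(beta[1:], 3))
print("semi-partial r (low-level):",
      round(semipartial_r(attention, low_level, [task_info, spatial_bias]), 3))
```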

    The 5-Choice Continuous Performance Test: Evidence for a Translational Test of Vigilance for Mice

    Attentional dysfunction is related to functional disability in patients with neuropsychiatric disorders such as schizophrenia, bipolar disorder, and Alzheimer's disease. Indeed, sustained attention/vigilance is among the leading targets for new medications designed to improve cognition in schizophrenia. Although vigilance is assessed frequently using the continuous performance test (CPT) in humans, few tests specifically assess vigilance in rodents. We describe the 5-choice CPT (5C-CPT), an elaboration of the 5-choice serial reaction (5CSR) task that includes non-signal trials, thus mimicking the task parameters of human CPTs that use signal and non-signal events to assess vigilance. The performances of C57BL/6J and DBA/2J mice were assessed in the 5C-CPT to determine whether this task could differentiate between strains. C57BL/6J mice were also trained in the 5CSR task and a simple reaction-time (RT) task involving only one choice (1CRT task). We hypothesized that (1) C57BL/6J performance would be superior to that of DBA/2J mice in the 5C-CPT, as measured by the sensitivity index (SI) from signal detection theory; (2) a vigilance decrement would be observed in both strains; and (3) RTs would increase across tasks with increased attentional load (1CRT task < 5CSR task < 5C-CPT). C57BL/6J mice exhibited superior SI levels compared to DBA/2J mice, but with no difference in accuracy. A vigilance decrement was observed in both strains; it was more pronounced in DBA/2J mice and unaffected by response bias. Finally, we observed increased RTs with increased attentional load, such that 1CRT task < 5CSR task < 5C-CPT, consistent with human performance in simple RT, choice RT, and CPT tasks. Thus, we have demonstrated construct validity for the 5C-CPT as a measure of vigilance that is analogous to human CPT studies.
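
    As a hedged illustration of the signal-detection logic behind such sensitivity measures, the sketch below computes hit and false-alarm rates from signal and non-signal trial counts and a d' value; the paper's SI may be defined differently (e.g., as a non-parametric index), and the counts are invented.
```python
# Hypothetical sketch: a signal-detection sensitivity measure (d' as a generic
# stand-in) from invented signal and non-signal trial counts.
from statistics import NormalDist

hits, misses = 180, 20                        # responses on signal trials (assumed counts)
false_alarms, correct_rejections = 30, 170    # responses on non-signal trials (assumed counts)

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# d' = z(hit rate) - z(false-alarm rate); higher values mean better
# discrimination of signal from non-signal events.
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)
print(f"hit rate={hit_rate:.2f}, FA rate={fa_rate:.2f}, d'={d_prime:.2f}")
```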

    Early predictors of impaired social functioning in male rhesus macaques (Macaca mulatta)

    Autism spectrum disorder (ASD) is characterized by social cognition impairments, but its basic disease mechanisms remain poorly understood. Progress has been impeded by the absence of animal models that manifest behavioral phenotypes relevant to ASD. Rhesus monkeys are an ideal model organism to address this barrier to progress. Like humans, rhesus monkeys are highly social, possess complex social cognition abilities, and exhibit pronounced individual differences in social functioning. Moreover, we have previously shown that Low-Social (LS) vs. High-Social (HS) adult male monkeys exhibit lower social motivation and poorer social skills. It is not known, however, when these social deficits first emerge. The goals of this study were to test whether juvenile LS and HS monkeys differed as infants in their ability to process social information, and whether infant social abilities predicted later social classification (i.e., LS vs. HS), in order to facilitate earlier identification of monkeys at risk for poor social outcomes. Social classification was determined for N = 25 LS and N = 25 HS male monkeys that were 1–4 years of age. As part of a colony-wide assessment, these monkeys had previously undergone, as infants, tests of face recognition memory and of the ability to respond appropriately to conspecific social signals. Monkeys later identified as LS (vs. HS) showed impairments in recognizing familiar vs. novel faces and in the species-typical adaptive ability to gaze avert to scenes of conspecific aggression. Additionally, a multivariate logistic regression using infant social ability measures perfectly predicted the later social classification of all N = 50 monkeys. These findings suggest that an early capacity to process important social information may account for differences in rhesus monkeys' motivation and competence to establish and maintain social relationships later in life. Further development of this model will facilitate the identification of novel biological targets for intervention to improve social outcomes in at-risk young monkeys.
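
    A minimal sketch of the classification step described above, assuming hypothetical infant measures (a face-recognition score and a gaze-aversion score) and simulated data; it is not the study's model or data, and it does not attempt to reproduce the reported perfect prediction.
```python
# Hypothetical sketch: logistic regression predicting later social group
# (LS vs. HS) from two invented infant measures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50
face_recognition = rng.normal(size=n)        # infant face-recognition score (placeholder)
gaze_aversion = rng.normal(size=n)           # gaze aversion to aggression scenes (placeholder)
is_low_social = (face_recognition + gaze_aversion + 0.5 * rng.normal(size=n)) < 0

X = np.column_stack([face_recognition, gaze_aversion])
model = LogisticRegression().fit(X, is_low_social)

# In-sample accuracy of the toy classifier.
print("accuracy:", model.score(X, is_low_social))
```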

    Scenes, saliency maps and scanpaths

    The aim of this chapter is to review some of the key research investigating how people look at pictures. In particular, my goal is to provide theoretical background for those who are new to the field, while also explaining some of the relevant methods and analyses. I begin by introducing eye movements in the context of natural scene perception. As in other complex tasks, eye movements provide a measure of attention and information processing over time, and they tell us about how the foveated visual system determines what to prioritise. I then describe some of the many measures which have been derived to summarize where people look in complex images. These include global measures, analyses based on regions of interest, and comparisons based on heat maps. A particularly popular approach for trying to explain fixation locations is the saliency map approach, and the first half of the chapter is mostly devoted to this topic. A large number of papers and models are built on this approach, but it is also worth spending time on because the methods involved have been used across a wide range of applications. The saliency map approach is based on the fact that the visual system has topographic maps of visual features, that contrast within these features seems to be represented and prioritized, and that a central representation can be used to control attention and eye movements. This approach, and the underlying principles, have led to an increase in the number of researchers using complex natural scenes as stimuli. It is therefore important that those new to the field are familiar with saliency maps, their usage, and their pitfalls. I describe the original implementation of this approach (Itti & Koch, 2000), which uses spatial filtering at different levels of coarseness and combines the resulting maps in an attempt to identify the regions which stand out from their background. Evaluating this model requires comparing fixation locations to model predictions. Several different experimental and comparison methods have been used, but most recent research shows that bottom-up guidance is rather limited in terms of predicting real eye movements. The second part of the chapter is largely concerned with measuring eye movement scanpaths. Scanpaths are the sequential patterns of fixations and saccades made when looking at something for a period of time. They show regularities which may reflect top-down attention, and some have attempted to link these to memory and an individual's mental model of what they are looking at. While not all researchers will be testing hypotheses about scanpaths, an understanding of the underlying methods and theory will be of benefit to all. I describe the theories behind analyzing eye movements in this way, and the various methods which have been used to represent and compare scanpaths. These methods allow one to quantify the similarity between two viewing patterns, and this similarity is linked to both the image and the observer. The last part of the chapter describes some applications of eye movements in image viewing. The methods discussed can be applied to complex images, and therefore these experiments can tell us about perception in art and marketing, as well as about machine vision.
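
    As a hedged, drastically simplified illustration of the center-surround idea behind saliency maps, the sketch below blurs an image at fine and coarse scales and takes their difference; it is not the full Itti & Koch (2000) model (which combines multiple feature channels and scales), and the input here is random noise rather than a real scene.
```python
# Hypothetical sketch: a single-channel center-surround "saliency" map.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(256, 256)            # stand-in for a greyscale scene

# "Center" and "surround" are fine and coarse Gaussian blurs; their absolute
# difference highlights regions that stand out from their local background.
center = gaussian_filter(image, sigma=2)
surround = gaussian_filter(image, sigma=8)
saliency = np.abs(center - surround)

# Normalize to [0, 1] so the map can be compared against fixation locations.
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
print(saliency.shape, saliency.max())
```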

    Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task

    Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades and viewing time. As a key finding, the local image features around the current fixation predicted that fixation's duration; for instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general.
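
    A minimal sketch of a linear mixed model of this kind, with fixation duration predicted by a few local image features and an oculomotor covariate, and subjects as random intercepts; the column names and simulated data are illustrative assumptions, not the study's predictors or results.
```python
# Hypothetical sketch: linear mixed model for fixation durations with
# subject-level random intercepts, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
data = pd.DataFrame({
    "subject": rng.integers(1, 25, size=n),          # participant id (placeholder)
    "luminance": rng.normal(size=n),                 # local luminance (standardized, placeholder)
    "edge_density": rng.normal(size=n),              # local edge density (standardized, placeholder)
    "prev_saccade": rng.normal(size=n),              # amplitude of the previous saccade (placeholder)
})
# In this toy data, higher luminance shortens fixations, echoing the abstract.
data["duration"] = (250 - 10 * data["luminance"] + 5 * data["edge_density"]
                    + 4 * data["prev_saccade"] + rng.normal(scale=20, size=n))

model = smf.mixedlm("duration ~ luminance + edge_density + prev_saccade",
                    data, groups=data["subject"]).fit()
print(model.summary())
```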